
    On the Generic Capacity of K-User Symmetric Linear Computation Broadcast

    Linear computation broadcast (LCBC) refers to a setting with $d$-dimensional data stored at a central server, where $K$ users, each with some prior linear side-information, wish to retrieve various linear combinations of the data. The goal is to determine the minimum amount of information that must be broadcast to satisfy all the users. The reciprocal of the optimal broadcast cost is the capacity of LCBC. The capacity is known for up to $K=3$ users. Since LCBC includes index coding as a special case, large-$K$ settings of LCBC are at least as hard as the index coding problem. Instead of the general setting (all instances), by focusing on the generic setting (almost all instances) this work shows that the generic capacity of the symmetric LCBC (where every user has $m'$ dimensions of side-information and $m$ dimensions of demand) for a large number of users ($K>d$ suffices) is $C_g=1/\Delta_g$, where $\Delta_g=\min\left\{\max\{0,d-m'\},\ Km,\ \frac{dm}{m+m'}\right\}$ is the broadcast cost that is both achievable and unbeatable asymptotically almost surely for large $n$, among all LCBC instances with the given parameters $p,K,d,m,m'$. Relative to baseline schemes of random coding or separate transmissions, $C_g$ shows an extremal gain by a factor of $K$ as a function of the number of users, and by a factor of $\approx d/4$ as a function of data dimensions, when optimized over the remaining parameters. For an arbitrary number of users, the generic capacity of the symmetric LCBC is characterized within a factor of $2$.
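
    As a quick numerical illustration (not code from the paper), the broadcast cost $\Delta_g$ and generic capacity $C_g$ from the abstract can be evaluated for sample parameters; the parameter values below are arbitrary assumptions chosen only to show the three-way minimum at work:

```python
from fractions import Fraction

def generic_broadcast_cost(K, d, m, m_prime):
    """Delta_g = min{ max(0, d - m'), K*m, d*m / (m + m') }
    for the symmetric LCBC (illustrative evaluation only)."""
    return min(
        Fraction(max(0, d - m_prime)),   # zero-forcing side-information
        Fraction(K * m),                 # separate transmissions to K users
        Fraction(d * m, m + m_prime),    # the generic-setting bound
    )

def generic_capacity(K, d, m, m_prime):
    """C_g = 1 / Delta_g (assumes Delta_g > 0)."""
    return 1 / generic_broadcast_cost(K, d, m, m_prime)

# Example: K=10 users, d=6 data dims, m=2 demand dims, m'=1 side-info dims
cost = generic_broadcast_cost(10, 6, 2, 1)  # min{5, 20, 4} = 4
cap = generic_capacity(10, 6, 2, 1)         # 1/4
```

Here the third term $\frac{dm}{m+m'}$ is the active bound, matching the abstract's claim that the generic cost can beat both separate transmissions ($Km$) and schemes that only exploit side-information dimension.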

    Interpretable Clustering on Dynamic Graphs with Recurrent Graph Neural Networks

    We study the problem of clustering nodes in a dynamic graph, where the connections between nodes and nodes' cluster memberships may change over time, e.g., due to community migration. We first propose a dynamic stochastic block model that captures these changes, and a simple decay-based clustering algorithm that clusters nodes based on weighted connections between them, where the weight decreases at a fixed rate over time. This decay rate can then be interpreted as signifying the importance of including historical connection information in the clustering. However, the optimal decay rate may differ for clusters with different rates of turnover. We characterize the optimal decay rate for each cluster and propose a clustering method that achieves almost exact recovery of the true clusters. We then demonstrate the efficacy of our clustering algorithm with optimized decay rates on simulated graph data. Recurrent neural networks (RNNs), a popular algorithm for sequence learning, use a similar decay-based method, and we use this insight to propose two new RNN-GCN (graph convolutional network) architectures for semi-supervised graph clustering. We finally demonstrate that the proposed architectures perform well on real data compared to state-of-the-art graph clustering algorithms.
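
    A minimal sketch of the decay-based edge weighting described above (a hypothetical helper, not the authors' code): at each snapshot, historical edge weights shrink by a fixed decay factor while the current adjacency is added in full, so older connections contribute geometrically less:

```python
import numpy as np

def update_weights(W_prev, A_t, decay):
    """Decay-based edge weighting for a dynamic graph.
    W_prev: previous weight matrix; A_t: adjacency at time t; 0 <= decay < 1.
    Each past snapshot's contribution shrinks by `decay` per step."""
    return decay * W_prev + A_t

# Toy run over three snapshots of a 2-node graph: edge present, absent, present
snapshots = [np.array([[0, 1], [1, 0]]),
             np.array([[0, 0], [0, 0]]),
             np.array([[0, 1], [1, 0]])]
W = np.zeros((2, 2))
for A in snapshots:
    W = update_weights(W, A, decay=0.5)
# W[0, 1] = 0.5**2 * 1 + 0.5 * 0 + 1 = 1.25
```

A larger decay factor weights history more heavily, which is why (per the abstract) the optimal rate depends on each cluster's turnover.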

    On the Capacity of Secure K-user Product Computation over a Quantum MAC

    Inspired by a recent study by Christensen and Popovski on secure $2$-user product computation for finite fields of prime order over a quantum multiple access channel (QMAC), the generalization to $K$ users and arbitrary finite fields is explored. Combining ideas of batch processing, the quantum $2$-sum protocol, a secure computation scheme of Feige, Kilian and Naor (FKN), a field-group isomorphism and additive secret sharing, asymptotically optimal (capacity-achieving for large alphabet) schemes are proposed for secure $K$-user (any $K$) product computation over any finite field. The capacity of modulo-$d$ ($d\geq 2$) secure $K$-sum computation over the QMAC is found to be $2/K$ computations/qudit as a byproduct of the analysis.

    Boosted ab initio Cryo-EM 3D Reconstruction with ACE-EM

    The central problem in cryo-electron microscopy (cryo-EM) is to recover the 3D structure from noisy 2D projection images, which requires estimating the missing projection angles (poses). Recent methods attempted to solve the 3D reconstruction problem with the autoencoder architecture, which suffers from the latent vector space sampling problem and frequently produces suboptimal pose inferences and inferior 3D reconstructions. Here we present an improved autoencoder architecture called ACE (Asymmetric Complementary autoEncoder), based on which we designed the ACE-EM method for cryo-EM 3D reconstructions. Compared to previous methods, ACE-EM reached higher pose space coverage within the same training time and boosted the reconstruction performance regardless of the choice of decoders. With this method, the Nyquist resolution (highest possible resolution) was reached for 3D reconstructions of both simulated and experimental cryo-EM datasets. Furthermore, ACE-EM is the only amortized inference method that reached the Nyquist resolution.

    Fault-Tolerant Learning for Term Extraction


    Intersection-free Robot Manipulation with Soft-Rigid Coupled Incremental Potential Contact

    This paper presents a novel simulation platform, ZeMa, designed for robotic manipulation tasks involving soft objects. Such simulation ideally requires three properties: two-way soft-rigid coupling, intersection-free guarantees, and frictional contact modeling, with acceptable runtime suitable for deep and reinforcement learning tasks. Current simulators often satisfy only a subset of these needs, primarily focusing on distinct rigid-rigid or soft-soft interactions. The proposed ZeMa prioritizes physical accuracy and integrates the incremental potential contact method, offering unified dynamics simulation for both soft and rigid objects. It efficiently manages soft-rigid contact, operating 75x faster than baseline tools with similar methodologies, such as IPC-GraspSim. To demonstrate its applicability, we employ it for parallel grasp generation, penetrated grasp repair, and reinforcement learning for grasping, successfully transferring the trained RL policy to real-world scenarios.

    Learning biological neuronal networks with artificial neural networks: neural oscillations

    First-principles-based modeling has been extremely successful in providing crucial insights and predictions for complex biological functions and phenomena. However, such models can be hard to build and expensive to simulate for complex living systems. On the other hand, modern data-driven methods thrive at modeling many types of high-dimensional and noisy data. Still, the training and interpretation of these data-driven models remain challenging. Here, we combine the two types of methods to model stochastic neuronal network oscillations. Specifically, we develop a class of first-principles-based artificial neural networks to provide faithful surrogates to the high-dimensional, nonlinear oscillatory dynamics produced by neural circuits in the brain. Furthermore, when the training data set is enlarged within a range of parameter choices, the artificial neural networks become generalizable to these parameters, covering cases in distinctly different dynamical regimes. In all, our work opens a new avenue for modeling complex neuronal network dynamics with artificial neural networks.

    FedML-HE: An Efficient Homomorphic-Encryption-Based Privacy-Preserving Federated Learning System

    Federated Learning trains machine learning models on distributed devices by aggregating local model updates instead of local data. However, privacy concerns arise because the aggregated local models on the server may reveal sensitive personal information through inversion attacks. Privacy-preserving methods, such as homomorphic encryption (HE), then become necessary for FL training. Despite HE's privacy advantages, its applications suffer from impractical overheads, especially for foundation models. In this paper, we present FedML-HE, the first practical federated learning system with efficient HE-based secure model aggregation. FedML-HE proposes to selectively encrypt sensitive parameters, significantly reducing both computation and communication overheads during training while providing customizable privacy preservation. Our optimized system demonstrates considerable overhead reduction, particularly for large foundation models (e.g., ~10x reduction for ResNet-50, and up to ~40x reduction for BERT), showing the potential for scalable HE-based FL deployment.
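
    The selective-encryption idea can be sketched in plain Python. This is an illustrative sketch only, not the FedML-HE API: the magnitude-based sensitivity heuristic and the function name are assumptions, and actual HE encryption is left as a stub:

```python
import numpy as np

def select_sensitive(params, frac=0.1):
    """Pick the `frac` fraction of parameters with largest magnitude as a
    stand-in sensitivity score; only these indices would be HE-encrypted,
    leaving the rest in plaintext to cut encryption/communication cost."""
    k = max(1, int(frac * params.size))
    # indices of the k largest-magnitude entries (order within the set unspecified)
    return np.argpartition(np.abs(params), -k)[-k:]

params = np.array([0.1, -3.0, 0.05, 2.0, -0.2])
mask = select_sensitive(params, frac=0.4)  # 2 of the 5 entries
sensitive_part = params[mask]              # would be encrypted with HE
```

Encrypting only a small, carefully chosen subset is what drives the overhead reductions the abstract reports, since HE cost scales with the number of encrypted values.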